Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor is then detailed, along with the three neural network frameworks used and the CUDA code developed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code proves to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.